
    Rethinking Image Forgery Detection via Contrastive Learning and Unsupervised Clustering

    Image forgery detection aims to detect and locate forged regions in an image. Most existing algorithms formulate forgery detection as a classification problem, labeling each pixel as forged or pristine. However, the definition of forged and pristine pixels is only relative within a single image; e.g., a forged region in image A is actually a pristine one in its source image B (splicing forgery). This relative definition has been severely overlooked by existing methods, which unnecessarily mix forged (pristine) regions across different images into the same category. To resolve this dilemma, we propose the FOrensic ContrAstive cLustering (FOCAL) method, a novel, simple yet very effective paradigm based on contrastive learning and unsupervised clustering for image forgery detection. Specifically, FOCAL 1) uses pixel-level contrastive learning to supervise high-level forensic feature extraction in an image-by-image manner, explicitly reflecting the above relative definition; 2) employs an on-the-fly unsupervised clustering algorithm (instead of a trained one) to cluster the learned features into forged/pristine categories, further suppressing cross-image influence from the training data; and 3) allows the detection performance to be further boosted via simple feature-level concatenation, without retraining. Extensive experiments over six public testing datasets demonstrate that FOCAL significantly outperforms state-of-the-art competing algorithms by large margins: +24.3% on Coverage, +18.6% on Columbia, +17.5% on FF++, +14.2% on MISD, +13.5% on CASIA and +10.3% on NIST in terms of IoU. The FOCAL paradigm could bring fresh insights and serve as a novel benchmark for the image forgery detection task. The code is available at https://github.com/HighwayWu/FOCAL
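
    The two ingredients above, per-image pixel-level contrastive supervision and training-free clustering at inference, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the supervised-contrastive loss form, the 2-means routine, and all names are illustrative.

    ```python
    import numpy as np

    def pixel_contrastive_loss(feats, mask, tau=0.1):
        """Contrastive loss computed WITHIN one image only.

        feats: (N, D) L2-normalized pixel features; mask: (N,) 0/1 forged labels.
        Pixels of the same class in this image are positives, reflecting the
        relative forged/pristine definition (no cross-image pairs).
        """
        sim = feats @ feats.T / tau                       # (N, N) similarities
        np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
        log_prob = sim - np.log(np.exp(sim).sum(1, keepdims=True))
        same = (mask[:, None] == mask[None, :]).astype(float)
        np.fill_diagonal(same, 0.0)                       # no self-positives
        return float(-(log_prob * same).sum() / same.sum())

    def two_means_cluster(feats, iters=10):
        """On-the-fly 2-way clustering at inference: no fitted classifier.

        Farthest-point initialization keeps the sketch deterministic."""
        c0 = feats[0]
        c1 = feats[np.argmax(((feats - c0) ** 2).sum(1))]
        centers = np.stack([c0, c1])
        for _ in range(iters):
            d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in range(2):                            # skip empty clusters
                if (assign == k).any():
                    centers[k] = feats[assign == k].mean(0)
        return assign
    ```

    At test time only the clustering step runs, so which cluster means "forged" must still be decided (e.g., by cluster size), another detail the sketch leaves out.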

    Generalizable Synthetic Image Detection via Language-guided Contrastive Learning

    The heightened realism of AI-generated images can be attributed to the rapid development of synthetic models, including generative adversarial networks (GANs) and diffusion models (DMs). The malevolent use of synthetic images, such as the dissemination of fake news or the creation of fake profiles, however, raises significant concerns about the authenticity of images. Though many forensic algorithms have been developed for detecting synthetic images, their performance, especially their generalization capability, is still far from adequate to cope with the increasing number of synthetic models. In this work, we propose a simple yet very effective synthetic image detection method via language-guided contrastive learning and a new formulation of the detection problem. We first augment the training images with carefully designed textual labels, enabling us to use joint image-text contrastive learning for forensic feature extraction. In addition, we formulate synthetic image detection as an identification problem, which is vastly different from traditional classification-based approaches. We show that our proposed LanguAge-guided SynThEsis Detection (LASTED) model achieves much improved generalizability to unseen image generation models and delivers promising performance that far exceeds state-of-the-art competitors by +22.66% accuracy and +15.24% AUC. The code is available at https://github.com/HighwayWu/LASTED
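
    The identification-style formulation can be illustrated with a toy sketch: image features are trained jointly with textual labels by a symmetric contrastive loss, and at test time an image is assigned to whichever text anchor its embedding is most similar to. This NumPy sketch is a hedged stand-in; the prompt set, loss form, and temperature are assumptions, not LASTED's actual configuration.

    ```python
    import numpy as np

    def clip_style_loss(img, txt, tau=0.07):
        """Symmetric image-text contrastive loss over matched pairs:
        row i of `img` is paired with row i of `txt` (both L2-normalized).
        A generic stand-in for the joint image-text objective."""
        logits = img @ txt.T / tau
        log_p_it = logits - np.log(np.exp(logits).sum(1, keepdims=True))
        log_p_ti = logits - np.log(np.exp(logits).sum(0, keepdims=True))
        return float(-(np.trace(log_p_it) + np.trace(log_p_ti)) / (2 * len(img)))

    def identify(image_feat, text_anchors):
        """Detection as identification: return the index of the most similar
        textual anchor (e.g., 0 = "real photo", 1 = "synthetic image")."""
        return int(np.argmax(text_anchors @ image_feat))
    ```

    Because the decision compares against text anchors rather than a trained binary head, swapping in new prompts needs no retraining, which is one intuition for the improved generalization claimed above.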

    Low voltage polymer network liquid crystal for infrared spatial light modulators

    We report a low-voltage, fast-response polymer network liquid crystal (PNLC) infrared phase modulator. To optimize device performance, we propose a physical model to understand the effect of curing temperature on average domain size. Good agreement between model and experiment is obtained. By optimizing the UV curing temperature and employing a large-dielectric-anisotropy LC host, we have lowered the 2π phase-change voltage to 22.8 V at a wavelength of 1.55 µm while keeping the response time at about 1 ms. Widespread application of such a PNLC, integrated into a high-resolution liquid-crystal-on-silicon (LCoS) device, as an infrared spatial light modulator is foreseeable.
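
    The 2π target at 1.55 µm follows from the standard phase-retardation relation δ = 2πdΔn/λ. A quick sketch of that relation; the birefringence value used below is illustrative, not the paper's LC host:

    ```python
    import numpy as np

    def phase_retardation(d_um, delta_n, wavelength_um):
        """Phase change (rad) of a homogeneous LC layer: delta = 2*pi*d*dn/lambda."""
        return 2 * np.pi * d_um * delta_n / wavelength_um

    def cell_gap_for_2pi(delta_n, wavelength_um):
        """Minimum cell gap giving a full 2*pi phase swing: d = lambda / dn."""
        return wavelength_um / delta_n
    ```

    For an assumed effective birefringence of 0.2, a full 2π swing at 1.55 µm requires d = 7.75 µm; lowering the drive voltage for a given gap then hinges on the host's dielectric anisotropy, as the abstract notes.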

    Generating Robust Adversarial Examples against Online Social Networks (OSNs)

    Online Social Networks (OSNs) have blossomed into prevailing transmission channels for images in the modern era. Adversarial examples (AEs) deliberately designed to mislead deep neural networks (DNNs) are found to be fragile against the inevitable lossy operations conducted by OSNs. As a result, AEs lose their attack capabilities after being transmitted over OSNs. In this work, we aim to design a new framework for generating robust AEs that can survive OSN transmission; namely, the AEs both before and after OSN transmission possess strong attack capabilities. To this end, we first propose a differentiable network termed SImulated OSN (SIO) to simulate the various operations conducted by an OSN. Specifically, the SIO network consists of two modules: 1) a differentiable JPEG layer approximating the ubiquitous JPEG compression and 2) an encoder-decoder subnetwork mimicking the remaining operations. Based on the SIO network, we then formulate an optimization framework that generates robust AEs by enforcing that the model outputs with and without passing through the SIO are both misled. Extensive experiments conducted over Facebook, WeChat and QQ demonstrate that our attack method produces more robust AEs than existing approaches, especially under small distortion constraints; the performance gain in terms of Attack Success Rate (ASR) can exceed 60%. Furthermore, we build a public dataset containing more than 10,000 pairs of AEs processed by Facebook, WeChat or QQ, facilitating future research on robust AE generation. The dataset and code are available at https://github.com/csjunjun/RobustOSNAttack.git (26 pages, 9 figures).
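
    The joint objective, misleading the model both with and without the simulated OSN in the loop, can be sketched with a toy PGD attack on a linear classifier, where a fixed matrix C stands in for the differentiable SIO network. Everything here (the margin loss, the linear stand-ins, the step sizes) is an illustrative assumption, not the paper's implementation:

    ```python
    import numpy as np

    def grad_margin(W, x, y):
        """Gradient w.r.t. x of (best wrong-class logit - true logit), logits = W @ x."""
        logits = W @ x
        wrong = int(np.argmax(np.delete(logits, y)))
        wrong += wrong >= y                              # map back to full index
        return W[wrong] - W[y]

    def robust_pgd(x, y, W, C, eps, alpha=0.1, steps=50):
        """L_inf PGD that jointly attacks the clean input and its simulated
        post-OSN version C @ x (C is a toy stand-in for the SIO network)."""
        x_adv = x.copy()
        for _ in range(steps):
            # sum of gradients from both branches; chain rule gives C.T for
            # the branch that passes through the simulated OSN
            g = grad_margin(W, x_adv, y) + C.T @ grad_margin(W, C @ x_adv, y)
            x_adv = x_adv + alpha * np.sign(g)           # ascend the margin loss
            x_adv = x + np.clip(x_adv - x, -eps, eps)    # project to eps-ball
        return x_adv
    ```

    The key design point survives the simplification: the same perturbation is optimized against two forward paths, so the resulting AE remains adversarial both before and after the (simulated) lossy transmission.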

    Tuning the correlated color temperature of white LED with a guest-host liquid crystal

    We demonstrate an electro-optic method to tune the correlated color temperature (CCT) of a white light-emitting diode (WLED) with a color conversion film consisting of a fluorescent dichroic dye doped into a liquid crystal host. By controlling the molecular reorientation of the dichroic dyes, the power ratio of the transmitted blue and red components of the white light can be accurately manipulated, resulting in different CCTs. In a proof-of-concept experiment, we show that the CCT of a yellow phosphor-converted WLED can be tuned from 3200 K to 4100 K. With further optimization, the tuning range could be enlarged to 2500 K with fairly good color performance: luminous efficacy of radiation (LER) > 300 lm/W, color rendering index (CRI) > 75, and Duv < 0.005. In addition, the operating voltage is below 5 V, and good angular color uniformity is achieved with remote-phosphor coating. This approach is promising for next-generation smart lighting.